The previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in type, scale, pose, occlusion, and lighting conditions. Current object detectors like YOLOv5 and Faster RCNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
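The detect-then-classify setup described above can be sketched as a flat label space derived from a three-level hierarchy, so a classifier head over detector crops can be mapped back to (type, make, model) triples. The toy taxonomy and function names below are illustrative, not the actual FGVD label set or pipeline.

```python
# Toy three-level hierarchy: vehicle type -> make -> model.
# The entries are illustrative placeholders, not FGVD's real taxonomy.
HIERARCHY = {
    "car": {"maruti": ["swift", "alto"], "honda": ["city"]},
    "two-wheeler": {"hero": ["splendor"], "bajaj": ["pulsar"]},
}

def flatten_labels(hierarchy):
    """Enumerate (type, make, model) triples: the fine-grained classes a
    classifier head would predict over detector crops."""
    return [
        (vtype, make, model)
        for vtype, makes in hierarchy.items()
        for make, models in makes.items()
        for model in models
    ]

labels = flatten_labels(HIERARCHY)

def to_hierarchy(flat_class_idx):
    """Map a flat class prediction back to its three-level label."""
    return labels[flat_class_idx]
```

A hierarchical classifier such as HRN predicts at each level jointly rather than over a flat list, but the flat-to-triple mapping above shows how detector outputs and a hierarchical label space connect.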
We introduce Action-GPT, a plug-and-play framework for incorporating Large Language Models (LLMs) into text-based action generation models. Action phrases in current motion capture datasets contain minimal and to-the-point information. By carefully crafting prompts for LLMs, we generate richer and more fine-grained descriptions of the action. We show that utilizing these detailed descriptions instead of the original action phrases leads to better alignment of text and motion spaces. Our experiments show qualitative and quantitative improvement in the quality of synthesized motions produced by recent text-to-motion models. Code, pretrained models and sample videos will be made available at https://actiongpt.github.io.
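The prompt-crafting step described above can be sketched as a simple template that expands a terse action phrase into a request for a detailed, body-part-level description. The template wording is an illustrative guess, not Action-GPT's actual prompt.

```python
def build_action_prompt(action_phrase: str) -> str:
    """Turn a terse mocap action phrase (e.g. 'kick') into an LLM prompt
    asking for a detailed description of how the action is performed.
    The phrasing here is a hypothetical example, not the paper's prompt."""
    return (
        f"Describe in detail how a person performs the action "
        f"'{action_phrase}'. Explain the movement of the arms, legs and "
        f"torso, step by step."
    )

prompt = build_action_prompt("kick")
```

The LLM's response to such a prompt, rather than the bare phrase, is then fed to the text encoder of the text-to-motion model.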
Pictionary, the popular sketch-based guessing game, provides an opportunity to analyze shared goal cooperative game play in restricted communication settings. However, some players occasionally draw atypical sketch content. While such content is sometimes relevant in the game context, it often represents a rule violation and impairs the game experience. To address such situations in a timely and scalable manner, we introduce DrawMon, a novel distributed framework for automatic detection of atypical sketch content in concurrently occurring Pictionary game sessions. We build specialized online interfaces to collect game session data and annotate atypical sketch content, resulting in AtyPict, the first-ever atypical sketch content dataset. We use AtyPict to train CanvasNet, a deep neural atypical content detection network. We utilize CanvasNet as a core component of DrawMon. Our analysis of post-deployment game session data indicates DrawMon's effectiveness for scalable monitoring and atypical sketch content detection. Beyond Pictionary, our contributions also serve as a design guide for customized atypical content response systems involving shared and interactive whiteboards. Code and datasets are available at https://drawm0n.github.io.
Unmanned Aerial Vehicle (UAV) based remote sensing systems combined with computer vision have the potential to assist building construction and disaster management, such as damage assessment during earthquakes. The vulnerability of a building to an earthquake can be assessed through inspections that take into account the expected damage progression of the associated components and the contribution of the components to structural system performance. Most of these inspections are conducted manually, leading to high utilization of manpower, time, and cost. This paper proposes a methodology to automate these inspections through UAV-based image data collection and a software library for post-processing that helps in estimating the seismic structural parameters. The key parameters considered here are the distances between adjacent buildings, building plan shape, building plan area, objects on the rooftop, and roof layout. The accuracy of the proposed method in estimating the aforementioned parameters is verified through field measurements taken using a distance measuring sensor and also from data obtained through Google Earth. Additional details and code can be accessed from https://uvrsabi.github.io/.
Pose-based action recognition is predominantly tackled by approaches that treat the input skeleton in a monolithic fashion, i.e. joints in the pose tree are processed as a whole. However, such approaches ignore the fact that action categories are often characterized by localized action dynamics involving only small subsets of part joint groups, such as the hands (e.g. "Thumbs up") or the legs (e.g. "Kicking"). Although part-grouping based approaches exist, each part group is not considered within the global pose frame, causing such methods to fall short. Further, conventional approaches employ independent modality streams (e.g. joint, bone, joint velocity, bone velocity) and train their network multiple times on these streams, massively increasing the number of training parameters. To address these issues, we introduce PSUMNet, a novel approach for scalable and efficient pose-based action recognition. At the representation level, we propose a global frame based part stream approach as opposed to conventional modality based streams. Within each part stream, the associated data from multiple modalities is unified and consumed by the processing pipeline. Experimentally, PSUMNet achieves state of the art performance on the widely used NTURGB+D 60/120 datasets and the dense joint skeleton datasets NTU 60-X/120-X. PSUMNet is highly efficient and outperforms competing methods which use 100%-400% more parameters. PSUMNet also generalizes to the SHREC hand gesture dataset with competitive performance. Overall, PSUMNet's scalability, performance, and efficiency make it an attractive choice for action recognition and for deployment on compute-restricted embedded and edge devices. Code and pretrained models can be accessed at https://github.com/skelemoa/psumnet.
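The part-stream representation described above can be sketched as follows: each part group keeps its joints within the global frame, and multiple modalities (here, position and velocity) are stacked channel-wise so a single stream consumes unified multi-modal data. The part index sets are illustrative placeholders, not PSUMNet's actual partitioning.

```python
import numpy as np

# Illustrative part groups over a 25-joint skeleton (indices are made up,
# not PSUMNet's real grouping).
PARTS = {
    "body":  list(range(25)),                  # full skeleton, global frame
    "hands": [7, 8, 11, 12, 21, 22, 23, 24],
    "legs":  [13, 14, 15, 16, 17, 18, 19, 20],
}

def part_streams(joints):
    """joints: (T, 25, 3) pose sequence. For each part group, stack two
    modalities (position and frame-to-frame velocity) along the channel
    axis, yielding one unified tensor per part stream."""
    velocity = np.diff(joints, axis=0, prepend=joints[:1])
    return {
        name: np.concatenate([joints[:, idx], velocity[:, idx]], axis=-1)
        for name, idx in PARTS.items()
    }

streams = part_streams(np.zeros((10, 25, 3)))
```

One network then processes each part stream once, instead of training a separate network per modality stream.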
We consider the problem of achieving fair classification in Federated Learning (FL) under data heterogeneity. Most of the approaches proposed for fair classification require diverse data that represent the different demographic groups involved. In contrast, it is common for each client to own data that represent only a single demographic group. Hence, the existing approaches cannot be adopted to train fair classification models at the client level. To resolve this challenge, we propose several aggregation techniques. We empirically validate these techniques by comparing the resulting fairness metrics and accuracy on the CelebA, UTK, and FairFace datasets.
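One plausible aggregation heuristic for this setting can be sketched as a group-balanced parameter average: each demographic group receives equal total weight regardless of how many clients hold its data. This is an illustrative example of an aggregation technique, not necessarily one of the paper's proposed rules.

```python
import numpy as np

def aggregate_equal_group_weight(client_params, client_groups):
    """Average client model parameters so every demographic group gets
    equal total weight, however many clients represent it.
    client_params: list of 1-D parameter vectors, one per client.
    client_groups: list of group labels, one per client."""
    per_group = 1.0 / len(set(client_groups))
    weights = np.array(
        [per_group / client_groups.count(g) for g in client_groups]
    )
    return (weights[:, None] * np.stack(client_params)).sum(axis=0)

# Two clients from group "A", one from group "B": the lone "B" client
# counts as much as both "A" clients combined.
merged = aggregate_equal_group_weight(
    [np.array([0.0]), np.array([2.0]), np.array([4.0])], ["A", "A", "B"]
)
```

Contrast with FedAvg-style sample-count weighting, which would let an over-represented group dominate the merged model.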
The lack of fine-grained joints (facial joints, hand fingers) is a fundamental performance bottleneck for state of the art skeleton action recognition models. Despite this bottleneck, community efforts seem to be invested only in coming up with novel architectures. To specifically address this bottleneck, we introduce two new pose-based human action datasets - NTU60-X and NTU120-X. Our datasets extend the largest existing action recognition dataset, NTU-RGBD. In addition to the 25 body joints for each skeleton as in NTU-RGBD, the NTU60-X and NTU120-X datasets include finger and facial joints, enabling a richer skeleton representation. We appropriately modify the state of the art approaches to enable training using the introduced datasets. Our results demonstrate the effectiveness of these NTU-X datasets in overcoming the aforementioned bottleneck, improving performance overall and on the previously worst performing action categories. Code and pretrained models can be found at https://github.com/skelemoa/ntu-x.
Shallow depth-of-field images keep the subject in focus while blurring the foreground and background. Producing this effect requires a lens aperture much larger than those of smartphone cameras. Conventional methods acquire an RGB-D image and blur image regions according to their depth. However, such approaches fail on reflective or transparent surfaces and on finely detailed object silhouettes, where depth values are inaccurate or ambiguous. We present a learning-based method to synthesize defocus blur from a handheld burst acquired with a single small-aperture lens. Our deep learning model directly produces the shallow depth-of-field image, avoiding explicit depth-based blurring. The simulated aperture diameter equals the camera translation during the burst. Our method does not suffer from artifacts due to inaccurate or ambiguous depth estimates, and it is well suited for portrait photography.
Modern object detection architectures are moving towards employing self-supervised learning (SSL) to improve detection performance via related pretext tasks. Pretext tasks for monocular 3D object detection have not yet been explored in the literature. This paper studies the application of the established self-supervised bounding box recycling task, which labels random windows, as a pretext task. The classifier head of the 3D detector is trained to classify random windows containing different proportions of the ground truth objects, thus handling the foreground-background imbalance. We evaluate the pretext task using the RTM3D detection model as a baseline, with and without the application of data augmentation. We demonstrate improvements of 2-3% in mAP 3D and 0.9-1.5% in BEV scores using SSL over the baseline scores. We propose the inverse class frequency re-weighted (ICFW) mAP score, which highlights improvements in detection of low-frequency classes in a class-imbalanced dataset with a long tail. We demonstrate improvements in the ICFW mAP 3D and BEV scores to take into account the class imbalance in the KITTI validation dataset. We see a 4-5% increase in the ICFW metrics with the pretext task.
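The window-labeling pretext task described above can be sketched as follows: a random window is labeled by the fraction of its area covered by ground-truth boxes, producing background / partial / foreground targets for the classifier head. The threshold values and three-way labeling are illustrative assumptions, not the paper's exact scheme.

```python
def window_label(window, gt_boxes, thresholds=(0.3, 0.7)):
    """Label a random window by the fraction of its area covered by
    ground-truth boxes. Boxes are (x1, y1, x2, y2). Overlapping GT boxes
    may double-count coverage; acceptable for this sketch."""
    wx1, wy1, wx2, wy2 = window
    window_area = (wx2 - wx1) * (wy2 - wy1)
    covered = 0.0
    for x1, y1, x2, y2 in gt_boxes:
        ix1, iy1 = max(wx1, x1), max(wy1, y1)
        ix2, iy2 = min(wx2, x2), min(wy2, y2)
        if ix2 > ix1 and iy2 > iy1:
            covered += (ix2 - ix1) * (iy2 - iy1)
    frac = covered / window_area
    lo, hi = thresholds
    if frac < lo:
        return "background"
    return "foreground" if frac > hi else "partial"
```

Training the classifier head on many such windows exposes it to a controlled mix of foreground and background, counteracting the natural foreground-background imbalance in driving scenes.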
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework, now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning, which are related but are not classical RL algorithms. The role of simulators in training agents, and methods to validate, test, and robustify existing solutions in RL, are discussed.